Series: Everyone is using Terraform for IaC, so why not write the code more cleanly and readably? (Day 25)

Terraform Modules for Common AWS Services - EKS with Karpenter

AWS EKS with Karpenter Module Implementation

This article implements a Terraform module for the commonly used AWS EKS with Karpenter setup. The complete project code is shared on my GitHub.

Using Karpenter in AWS EKS instead of managed Node Groups has several potential benefits and use cases, but it also comes with some considerations and limitations.

Here are some of the benefits of using Karpenter:

  • More flexible scheduling and autoscaling: Karpenter is an open-source Kubernetes node autoscaler that provisions and removes nodes dynamically based on the resource requests of pending Pods and the load in the cluster. Resources are used more efficiently, and there is no need to manage Node Group sizes by hand.

  • Consistent behaviour across clusters: Karpenter is a general Kubernetes node autoscaler rather than an EKS-only feature, so if you run several clusters you can use the same provisioning model in all of them and get consistent resource management.

  • Resource cost optimization: Karpenter selects and manages nodes based on resource requests and Pod priority, which helps keep costs under control. It keeps the number of nodes close to the minimum required to satisfy the applications' needs.

  • Multi-cloud and hybrid-cloud ambitions: Karpenter's core is designed to be cloud-provider-agnostic, although AWS is currently by far the most mature provider, so evaluate how complete the support is outside AWS before relying on it in a multi-cloud or hybrid environment.

  • More advanced provisioning behaviour: Karpenter lets you express more advanced placement behaviour for specific workloads, for example bin-packing Pods onto fewer right-sized nodes, spreading them across zones with standard topology constraints, and accounting for DaemonSet overhead when sizing nodes.


The following walks through how to implement the Karpenter Terraform module:

  1. First, decide where the my_karpenter module lives, under modules/my_karpenter (a sketch of the EKS outputs this module consumes follows the directory tree):
├── configs
│   ├── cloudfront
│   │   └── distributions.yaml
│   ├── cloudwatch
│   │   └── loggroups.yaml
│   ├── iam
│   │   ├── assume_role_policies
│   │   │   ├── eks-cluster.json
│   │   │   ├── eks-fargate-pod-execution-role.json
│   │   │   └── eks-node-group.json
│   │   ├── iam.yaml
│   │   ├── role_policies
│   │   │   └── eks-cluster-cloudwatch-metrics.json
│   │   └── user_policies
│   │       └── admin_access.json
│   ├── kinesis
│   │   └── streams.yaml
│   ├── kms
│   │   ├── keys.yaml
│   │   └── policies
│   │       └── my-key-policy.json
│   ├── s3
│   │   ├── policies
│   │   │   └── my-bucket.json
│   │   └── s3.yaml
│   ├── subnet
│   │   └── my-subnets.yaml
│   └── vpc
│       └── my-vpcs.yaml
├── example.tfvars
├── locals.tf
├── main.tf
├── modules
│   ├── my_aws_load_balancer_controller
│   ├── my_cloudfront
│   ├── my_cloudwatch
│   ├── my_eips
│   ├── my_eks
│   ├── my_eventbridge
│   ├── my_iam
│   ├── my_igw
│   ├── my_instances
│   ├── my_karpenter
│   │   ├── 1.install-karpenter.sh
│   │   ├── 2.create-provision.sh
│   │   ├── karpenter_cloudformation.tf
│   │   ├── karpenter_helm.tf
│   │   ├── provider.tf
│   │   └── variables.tf
│   ├── my_kinesis_stream
│   ├── my_kms
│   ├── my_msk
│   ├── my_nacls
│   ├── my_route_tables
│   ├── my_s3
│   ├── my_subnets
│   └── my_vpc
├── my-ingress-controller-values.yaml
├── my-ingress-node-red.yaml
├── packer
│   └── apache-cassandra
└── variables.tf
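
The my_karpenter module consumes several attributes of the EKS cluster that the root main.tf later reads as module.eks.cluster_name, module.eks.endpoint, module.eks.oidc_url and module.eks.ca_certificate. Below is a minimal sketch of the outputs the my_eks module therefore needs to expose; the internal resource address aws_eks_cluster.this is illustrative, not taken from the original module:

output "cluster_name" {
  # Name of the EKS cluster, consumed by my_karpenter and other modules
  value = aws_eks_cluster.this.name
}

output "endpoint" {
  # API server endpoint used to configure the kubernetes/helm providers
  value = aws_eks_cluster.this.endpoint
}

output "oidc_url" {
  # OIDC issuer URL used to build the IRSA trust policy for the Karpenter controller
  value = aws_eks_cluster.this.identity[0].oidc[0].issuer
}

output "ca_certificate" {
  # Base64-encoded cluster CA certificate
  value = aws_eks_cluster.this.certificate_authority[0].data
}
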
  2. Write the my_karpenter module:
  • ./modules/my_karpenter/provider.tf:
data "aws_eks_cluster_auth" "eks_auth" {
  name = var.eks_cluster_name

  depends_on = [
    var.eks_cluster_endpoint,
    var.eks_ca_certificate
  ]
}

data "aws_caller_identity" "current" {}

provider "kubernetes" {
  host                      = var.eks_cluster_endpoint
  cluster_ca_certificate    = base64decode(var.eks_ca_certificate)
  token                     = data.aws_eks_cluster_auth.eks_auth.token
  #load_config_file          = false
}

provider "helm" {
  kubernetes {
    host                   = var.eks_cluster_endpoint
    token                  = data.aws_eks_cluster_auth.eks_auth.token
    cluster_ca_certificate = base64decode(var.eks_ca_certificate)
  }
}
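
Declaring provider blocks inside a child module works, but Terraform treats it as a legacy pattern: a module that configures its own providers cannot be used with count, for_each or depends_on, and is harder to remove cleanly. Here is a sketch of the alternative, assuming the provider configuration (and the aws_eks_cluster_auth lookup) moves to the root module and the provider blocks above are removed from my_karpenter:

# Root module (sketch): configure the providers once and hand them to the module.
data "aws_eks_cluster_auth" "eks_auth" {
  name = module.eks.cluster_name
}

provider "kubernetes" {
  alias                  = "eks"
  host                   = module.eks.endpoint
  cluster_ca_certificate = base64decode(module.eks.ca_certificate)
  token                  = data.aws_eks_cluster_auth.eks_auth.token
}

provider "helm" {
  alias = "eks"
  kubernetes {
    host                   = module.eks.endpoint
    cluster_ca_certificate = base64decode(module.eks.ca_certificate)
    token                  = data.aws_eks_cluster_auth.eks_auth.token
  }
}

module "karpenter" {
  source = "./modules/my_karpenter"

  providers = {
    kubernetes = kubernetes.eks
    helm       = helm.eks
  }

  # ... same input variables as in the main.tf shown later
}
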
  • ./modules/my_karpenter/variables.tf:
variable "aws_region" {
    type        = string
    description = "AWS Region"
    default     = "ap-northeast-1"
}

variable "chart_version" {
    type        = string
    description = "Chart Version of karpenter/karpenter"
    default     = "v0.20.0"
}

variable "namespace" {
  type        = string
  default     = "karpenter"
  description = "Namespace of karpenter/karpenter"
}

variable "create_namespace" {
  type        = bool
  default     = false
  description = "Needed to Create Namespace of karpenter/karpenter"
}

variable "eks_cluster_name" {
    description = "Name of the EKS Cluster"
}

variable "eks_cluster_endpoint" {
    type        = string
    description = "EKS Cluster Endpoint"
}

variable "eks_oidc_url" {
    type        = string
    description = "EKS Cluster OIDC Provider URL"
}

variable "eks_ca_certificate" {
    type        = string
    description = "EKS Cluster CA Certificate"
}

  • ./modules/my_karpenter/karpenter_cloudformation.tf:
# KarpenterNodeInstanceProfile
resource "aws_iam_instance_profile" "karpenter_node_instance_profile" {
  name = "KarpenterNodeInstanceProfile-${var.eks_cluster_name}"
  path = "/"
  role = aws_iam_role.karpenter_node_role.name
}

# KarpenterNodeRole
resource "aws_iam_role" "karpenter_node_role" {
  assume_role_policy = <<POLICY
{
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      }
    }
  ],
  "Version": "2012-10-17"
}
POLICY

  managed_policy_arns  = [
    "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
    "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy",
    "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy",
    "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
  ]
  max_session_duration = "3600"
  name                 = "KarpenterNodeRole-${var.eks_cluster_name}"
  path                 = "/"
}

# KarpenterControllerPolicy
resource "aws_iam_policy" "karpenter_controller_policy" {
  name = "KarpenterControllerPolicy-${var.eks_cluster_name}"
  path = "/"

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowScopedEC2InstanceActions",
      "Effect": "Allow",
      "Resource": [
        "arn:aws:ec2:${var.aws_region}::image/*",
        "arn:aws:ec2:${var.aws_region}::snapshot/*",
        "arn:aws:ec2:${var.aws_region}:*:spot-instances-request/*",
        "arn:aws:ec2:${var.aws_region}:*:security-group/*",
        "arn:aws:ec2:${var.aws_region}:*:subnet/*",
        "arn:aws:ec2:${var.aws_region}:*:launch-template/*"
      ],
      "Action": [
        "ec2:RunInstances",
        "ec2:CreateFleet"
      ]
    },
    {
      "Sid": "AllowScopedEC2LaunchTemplateActions",
      "Effect": "Allow",
      "Resource": "arn:aws:ec2:${var.aws_region}:*:launch-template/*",
      "Action": "ec2:CreateLaunchTemplate",
      "Condition": {
        "StringEquals": {
          "aws:RequestTag/kubernetes.io/cluster/${var.eks_cluster_name}": "owned"
        },
        "StringLike": {
          "aws:RequestTag/karpenter.sh/provisioner-name": "*"
        }
      }
    },
    {
      "Sid": "AllowScopedEC2InstanceActionsWithTags",
      "Effect": "Allow",
      "Resource": [
        "arn:aws:ec2:${var.aws_region}:*:fleet/*",
        "arn:aws:ec2:${var.aws_region}:*:instance/*",
        "arn:aws:ec2:${var.aws_region}:*:volume/*",
        "arn:aws:ec2:${var.aws_region}:*:network-interface/*"
      ],
      "Action": [
        "ec2:RunInstances",
        "ec2:CreateFleet"
      ],
      "Condition": {
        "StringEquals": {
          "aws:RequestTag/kubernetes.io/cluster/${var.eks_cluster_name}": "owned"
        },
        "StringLike": {
          "aws:RequestTag/karpenter.sh/provisioner-name": "*"
        }
      }
    },
    {
      "Sid": "AllowScopedResourceCreationTagging",
      "Effect": "Allow",
      "Resource": [
        "arn:aws:ec2:${var.aws_region}:*:fleet/*",
        "arn:aws:ec2:${var.aws_region}:*:instance/*",
        "arn:aws:ec2:${var.aws_region}:*:volume/*",
        "arn:aws:ec2:${var.aws_region}:*:network-interface/*",
        "arn:aws:ec2:${var.aws_region}:*:launch-template/*"
      ],
      "Action": "ec2:CreateTags",
      "Condition": {
        "StringEquals": {
          "aws:RequestTag/kubernetes.io/cluster/${var.eks_cluster_name}": "owned",
          "ec2:CreateAction": [
            "RunInstances",
            "CreateFleet",
            "CreateLaunchTemplate"
          ]
        },
        "StringLike": {
          "aws:RequestTag/karpenter.sh/provisioner-name": "*"
        }
      }
    },
    {
      "Sid": "AllowMachineMigrationTagging",
      "Effect": "Allow",
      "Resource": "arn:aws:ec2:${var.aws_region}:*:instance/*",
      "Action": "ec2:CreateTags",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/kubernetes.io/cluster/${var.eks_cluster_name}": "owned",
          "aws:RequestTag/karpenter.sh/managed-by": "${var.eks_cluster_name}"
        },
        "StringLike": {
          "aws:RequestTag/karpenter.sh/provisioner-name": "*"
        },
        "ForAllValues:StringEquals": {
          "aws:TagKeys": [
            "karpenter.sh/provisioner-name",
            "karpenter.sh/managed-by"
          ]
        }
      }
    },
    {
      "Sid": "AllowScopedDeletion",
      "Effect": "Allow",
      "Resource": [
        "arn:aws:ec2:${var.aws_region}:*:instance/*",
        "arn:aws:ec2:${var.aws_region}:*:launch-template/*"
      ],
      "Action": [
        "ec2:TerminateInstances",
        "ec2:DeleteLaunchTemplate"
      ],
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/kubernetes.io/cluster/${var.eks_cluster_name}": "owned"
        },
        "StringLike": {
          "aws:ResourceTag/karpenter.sh/provisioner-name": "*"
        }
      }
    },
    {
      "Sid": "AllowRegionalReadActions",
      "Effect": "Allow",
      "Resource": "*",
      "Action": [
        "ec2:DescribeAvailabilityZones",
        "ec2:DescribeImages",
        "ec2:DescribeInstances",
        "ec2:DescribeInstanceTypeOfferings",
        "ec2:DescribeInstanceTypes",
        "ec2:DescribeLaunchTemplates",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSpotPriceHistory",
        "ec2:DescribeSubnets"
      ],
      "Condition": {
        "StringEquals": {
          "aws:RequestedRegion": "${var.aws_region}"
        }
      }
    },
    {
      "Sid": "AllowGlobalReadActions",
      "Effect": "Allow",
      "Resource": "*",
      "Action": [
        "pricing:GetProducts",
        "ssm:GetParameter"
      ]
    },
    {
      "Sid": "AllowInterruptionQueueActions",
      "Effect": "Allow",
      "Resource": "${aws_sqs_queue.karpenter_interruption_queue.arn}",
      "Action": [
        "sqs:DeleteMessage",
        "sqs:GetQueueAttributes",
        "sqs:GetQueueUrl",
        "sqs:ReceiveMessage"
      ]
    },
    {
      "Sid": "AllowPassingInstanceRole",
      "Effect": "Allow",
      "Resource": "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/KarpenterNodeRole-${var.eks_cluster_name}",
      "Action": "iam:PassRole",
      "Condition": {
        "StringEquals": {
          "iam:PassedToService": "ec2.amazonaws.com"
        }
      }
    },
    {
      "Sid": "AllowAPIServerEndpointDiscovery",
      "Effect": "Allow",
      "Resource": "arn:aws:eks:${var.aws_region}:${data.aws_caller_identity.current.account_id}:cluster/${var.eks_cluster_name}",
      "Action": "eks:DescribeCluster"
    }
  ]
}
POLICY
}

# KarpenterInterruptionQueue
# KarpenterInterruptionQueuePolicy
resource "aws_sqs_queue" "karpenter_interruption_queue" {
  content_based_deduplication       = "false"
  delay_seconds                     = "0"
  fifo_queue                        = "false"
  kms_data_key_reuse_period_seconds = "300"
  max_message_size                  = "262144"
  message_retention_seconds         = "300"
  name                              = var.eks_cluster_name

  policy = <<POLICY
{
  "Id": "EC2InterruptionPolicy",
  "Statement": [
    {
      "Action": "sqs:SendMessage",
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "sqs.amazonaws.com",
          "events.amazonaws.com"
        ]
      },
      "Resource": "arn:aws:sqs:${var.aws_region}:${data.aws_caller_identity.current.account_id}:${var.eks_cluster_name}"
    }
  ],
  "Version": "2008-10-17"
}
POLICY

  receive_wait_time_seconds  = "0"
  sqs_managed_sse_enabled    = "true"
  visibility_timeout_seconds = "30"
}

# InstanceStateChangeRule
resource "aws_cloudwatch_event_rule" "karpenter_instance_state_change_rule" {
  event_bus_name = "default"
  event_pattern  = "{\"detail-type\":[\"EC2 Instance State-change Notification\"],\"source\":[\"aws.ec2\"]}"
  is_enabled     = "true"
  name           = "karpenter_${var.eks_cluster_name}-InstanceStateChangeRule"
}

resource "aws_cloudwatch_event_target" "karpenter_instance_state_change_rule_karpenter_interruption_queue_target" {
  arn       = aws_sqs_queue.karpenter_interruption_queue.arn
  rule      = aws_cloudwatch_event_rule.karpenter_instance_state_change_rule.name
  target_id = "KarpenterInterruptionQueueTarget"
}

# RebalanceRule
resource "aws_cloudwatch_event_rule" "karpenter_rebalance_rule" {
  event_bus_name = "default"
  event_pattern  = "{\"detail-type\":[\"EC2 Instance Rebalance Recommendation\"],\"source\":[\"aws.ec2\"]}"
  is_enabled     = "true"
  name           = "karpenter_${var.eks_cluster_name}-RebalanceRule"
}

resource "aws_cloudwatch_event_target" "karpenter_rebalance_rule_karpenter_interruption_queue_target" {
  arn       = aws_sqs_queue.karpenter_interruption_queue.arn
  rule      = aws_cloudwatch_event_rule.karpenter_rebalance_rule.name
  target_id = "KarpenterInterruptionQueueTarget"
}

# ScheduledChangeRule
resource "aws_cloudwatch_event_rule" "karpenter_scheduled_change_rule" {
  event_bus_name = "default"
  event_pattern  = "{\"detail-type\":[\"AWS Health Event\"],\"source\":[\"aws.health\"]}"
  is_enabled     = "true"
  name           = "karpenter_${var.eks_cluster_name}-ScheduledChangeRule"
}

resource "aws_cloudwatch_event_target" "karpenter_scheduled_change_rule_karpenter_interruption_queue_target" {
  arn       = aws_sqs_queue.karpenter_interruption_queue.arn
  rule      = aws_cloudwatch_event_rule.karpenter_scheduled_change_rule.name
  target_id = "KarpenterInterruptionQueueTarget"
}

# SpotInterruptionRule
resource "aws_cloudwatch_event_rule" "karpenter_spot_interruption_rule" {
  event_bus_name = "default"
  event_pattern  = "{\"detail-type\":[\"EC2 Spot Instance Interruption Warning\"],\"source\":[\"aws.ec2\"]}"
  is_enabled     = "true"
  name           = "karpenter_${var.eks_cluster_name}-SpotInterruptionRule"
}

resource "aws_cloudwatch_event_target" "karpenter_spot_interruption_rule_karpenter_interruption_queue_target" {
  arn       = aws_sqs_queue.karpenter_interruption_queue.arn
  rule      = aws_cloudwatch_event_rule.karpenter_spot_interruption_rule.name
  target_id = "KarpenterInterruptionQueueTarget"
}
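
The four EventBridge rule/target pairs above differ only in their event pattern. In the spirit of this series, they could be collapsed with for_each; here is a sketch that keeps the same rule names and behaviour:

locals {
  karpenter_event_rules = {
    InstanceStateChangeRule = {
      source      = "aws.ec2"
      detail_type = "EC2 Instance State-change Notification"
    }
    RebalanceRule = {
      source      = "aws.ec2"
      detail_type = "EC2 Instance Rebalance Recommendation"
    }
    ScheduledChangeRule = {
      source      = "aws.health"
      detail_type = "AWS Health Event"
    }
    SpotInterruptionRule = {
      source      = "aws.ec2"
      detail_type = "EC2 Spot Instance Interruption Warning"
    }
  }
}

resource "aws_cloudwatch_event_rule" "karpenter_interruption_rules" {
  for_each       = local.karpenter_event_rules
  name           = "karpenter_${var.eks_cluster_name}-${each.key}"
  event_bus_name = "default"

  # Same patterns as the four rules above, rendered from the map.
  event_pattern = jsonencode({
    source        = [each.value.source]
    "detail-type" = [each.value.detail_type]
  })
}

resource "aws_cloudwatch_event_target" "karpenter_interruption_rule_targets" {
  for_each  = aws_cloudwatch_event_rule.karpenter_interruption_rules
  rule      = each.value.name
  arn       = aws_sqs_queue.karpenter_interruption_queue.arn
  target_id = "KarpenterInterruptionQueueTarget"
}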

  • ./modules/my_karpenter/karpenter_helm.tf:
data "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  depends_on = [
    var.eks_cluster_endpoint,
    var.eks_ca_certificate
  ]
}

data "aws_iam_policy_document" "karpenter_controller_assume_role_policy" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]
    effect  = "Allow"

    condition {
      test     = "StringEquals"
      variable = "${replace(var.eks_oidc_url, "https://", "")}:sub"
      values   = ["system:serviceaccount:karpenter:karpenter"]
    }

    principals {
      identifiers = ["arn:aws:iam::${data.aws_caller_identity.current.account_id}:oidc-provider/${replace(var.eks_oidc_url, "https://", "")}"]
      type        = "Federated"
    }
  }
}

resource "aws_iam_role" "karpenter_controller_role" {
  name                 = "${var.eks_cluster_name}-karpenter"
  path                 = "/"
  max_session_duration = 3600

  assume_role_policy = join("", data.aws_iam_policy_document.karpenter_controller_assume_role_policy.*.json)

  tags = {
    "alpha.eksctl.io/cluster-name"                = var.eks_cluster_name
    "alpha.eksctl.io/eksctl-version"              = "0.112.0"
    "alpha.eksctl.io/iamserviceaccount-name"      = "karpenter/karpenter"
    "eksctl.cluster.k8s.io/v1alpha1/cluster-name" = var.eks_cluster_name
  }
}

resource "aws_iam_role_policy_attachment" "karpenter_controller_policy_attachment" {
  policy_arn = aws_iam_policy.karpenter_controller_policy.arn
  role       = aws_iam_role.karpenter_controller_role.name
}

resource "kubernetes_service_account" "karpenter" {
  automount_service_account_token = true
  metadata {
    name      = "karpenter"
    namespace = "karpenter"
    annotations = {
      "meta.helm.sh/release-name"      = "karpenter"
      "meta.helm.sh/release-namespace" = "karpenter"
      "eks.amazonaws.com/role-arn"     = aws_iam_role.karpenter_controller_role.arn
    }
    labels = {
      "app.kubernetes.io/instance"   = "karpenter"
      "app.kubernetes.io/name"       = "karpenter"
      "app.kubernetes.io/component"  = "controller"
      "app.kubernetes.io/managed-by" = "Helm"
      "app.kubernetes.io/version"    = trim(var.chart_version, "v")
      "helm.sh/chart"                = "karpenter-${var.chart_version}"
    }
  }
}
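
# NOTE: HELM_EXPERIMENTAL_OCI is only needed for Helm versions older than 3.8, and the
# export below only affects the local-exec shell, not the Terraform helm provider itself.
# In practice this resource mainly serves as an ordering hook between the aws-auth lookup
# and the helm_release that follows.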

resource "null_resource" "helm_experimental_oci" {
  triggers = {
    random = uuid()
  }
  provisioner "local-exec" {
    command = <<-EOT
      export HELM_EXPERIMENTAL_OCI=1
    EOT
  }

  depends_on = [
    data.kubernetes_config_map.aws_auth
  ]
}

resource "helm_release" "karpenter" {
  name             = "karpenter"
  repository       = "oci://public.ecr.aws/karpenter"
  chart            = "karpenter"
  version          = var.chart_version
  namespace        = var.namespace
  create_namespace = var.create_namespace

  set {
    name  = "serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn"
    value = aws_iam_role.karpenter_controller_role.arn
  }

  set {
    name  = "settings.aws.clusterName"
    value = var.eks_cluster_name
  }

  set {
    name  = "settings.aws.clusterEndpoint"
    value = var.eks_cluster_endpoint
  }

  set {
    name  = "settings.aws.defaultInstanceProfile"
    value = aws_iam_instance_profile.karpenter_node_instance_profile.name
  }

  set {
    name  = "settings.aws.interruptionQueueName"
    value = var.eks_cluster_name 
  }

  depends_on = [
    null_resource.helm_experimental_oci,
    var.eks_cluster_endpoint,
    var.eks_ca_certificate
  ]
}

# Needs to be added manually (see step 3 below)
# resource "kubernetes_config_map" "aws_auth" {
#   metadata {
#     name = "aws-auth"
#     namespace = "kube-system"
#   }

#   data = {
#     "mapRoles" = <<EOF
# ${data.kubernetes_config_map.aws_auth.data.mapRoles}
# - groups:
#     - system:bootstrappers
#     - system:nodes
#   rolearn: arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/KarpenterNodeRole-${var.eks_cluster_name}
#   username: system:node:{{EC2PrivateDNSName}}
# EOF
#   }

#    depends_on = [
#     data.kubernetes_config_map.aws_auth
#    ]
# }
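
The repeated set blocks can also be collapsed into a single values entry, which keeps all of the chart values in one place. A sketch equivalent to the helm_release above for this chart version (depends_on omitted for brevity):

resource "helm_release" "karpenter" {
  name             = "karpenter"
  repository       = "oci://public.ecr.aws/karpenter"
  chart            = "karpenter"
  version          = var.chart_version
  namespace        = var.namespace
  create_namespace = var.create_namespace

  # Same values the individual set blocks configure, expressed as one YAML document.
  values = [yamlencode({
    serviceAccount = {
      annotations = {
        "eks.amazonaws.com/role-arn" = aws_iam_role.karpenter_controller_role.arn
      }
    }
    settings = {
      aws = {
        clusterName            = var.eks_cluster_name
        clusterEndpoint        = var.eks_cluster_endpoint
        defaultInstanceProfile = aws_iam_instance_profile.karpenter_node_instance_profile.name
        interruptionQueueName  = var.eks_cluster_name
      }
    }
  })]
}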

  3. In step 2, the block commented out at the end of karpenter_helm.tf is left disabled because the aws-auth ConfigMap already exists in the cluster; apply the following change manually instead (a Terraform-based alternative is sketched after the snippet).
# Edit the aws-auth ConfigMap with the following command
$ kubectl edit cm aws-auth -n kube-system

# Insert this block under data -> mapRoles
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::<AWS_ACCOUNT_ID>:role/KarpenterNodeRole-<EKS_CLUSTER_NAME>
      username: system:node:{{EC2PrivateDNSName}}
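
If you would rather not edit the ConfigMap by hand, the same entry can be merged with Terraform. Below is a sketch using kubernetes_config_map_v1_data from the hashicorp/kubernetes provider, reusing the aws-auth data source already defined in karpenter_helm.tf; force = true makes Terraform take ownership of the mapRoles field, and the filter assumes every existing entry has a rolearn key:

locals {
  existing_map_roles = yamldecode(data.kubernetes_config_map.aws_auth.data["mapRoles"])

  karpenter_map_role = {
    rolearn  = aws_iam_role.karpenter_node_role.arn
    username = "system:node:{{EC2PrivateDNSName}}"
    groups   = ["system:bootstrappers", "system:nodes"]
  }
}

resource "kubernetes_config_map_v1_data" "aws_auth_karpenter" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  force = true

  data = {
    # Keep the existing entries and append the Karpenter node role exactly once.
    mapRoles = yamlencode(concat(
      [for r in local.existing_map_roles : r if r.rolearn != local.karpenter_map_role.rolearn],
      [local.karpenter_map_role]
    ))
  }
}
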
  4. Write the project-level code:
  • example.tfvars:
    Because we are switching to Karpenter, the existing Node Group is kept only as a scaled-down placeholder: desired_nodes and min_nodes are set to 0 (you could also comment the node group out and leave an empty list). Karpenter itself is configured to run on Fargate, so a karpenter Fargate Profile is added, together with two more Fargate Profiles for coredns and aws-load-balancer-controller.
aws_region="ap-northeast-1"
aws_profile="<YOUR_PROFILE>"
project_name="example"
department_name="SRE"
cassandra_root_password="<CASSANDRA_ROOT_PASSWORD>"
  • main.tf:
terraform {
  required_providers {
    aws = {
      version = "5.15.0"
    }
  }

  backend "s3" {
    bucket                  = "<YOUR_S3_BUCKET_NAME>"
    dynamodb_table          = "<YOUR_DYNAMODB_TABLE_NAME>"
    key                     = "terraform.tfstate"
    region                  = "ap-northeast-1"
    shared_credentials_file = "~/.aws/config"
    profile                 = "<YOUR_PROFILE>"
  }
}

# Other modules omitted...

# eks
module "eks" {
  aws_region       = var.aws_region
  aws_profile      = var.aws_profile
  cluster_name     = "MY-EKS-CLUSTER"
  cluster_role_arn = module.iam.iam_role_arn["eks-cluster"].arn

  endpoint_private_access = true

  public_subnets = [
    module.subnet.subnets["my-public-ap-northeast-1a"].id,
    module.subnet.subnets["my-public-ap-northeast-1c"].id,
    module.subnet.subnets["my-public-ap-northeast-1d"].id
  ]

  public_access_cidrs = local.bastion_allowed_ips

  private_subnets = [
    module.subnet.subnets["my-application-ap-northeast-1a"].id,
    module.subnet.subnets["my-application-ap-northeast-1c"].id,
    module.subnet.subnets["my-application-ap-northeast-1d"].id
  ]

  eks_version = "1.25"


  node_groups = [
    {
      name           = "ng-arm-spot"
      node_role_arn  = module.iam.iam_role_arn["eks-node-group"].arn
      ami_type       = "AL2_ARM_64"
      capacity_type  = "SPOT" # ON_DEMAND or SPOT
      instance_types = ["t4g.small"]
      disk_size      = 20
      desired_nodes  = 0
      max_nodes      = 2
      min_nodes      = 0
      labels         = {}
      taint = [
        {
          key    = "spotInstance"
          value  = "true"
          effect = "PREFER_NO_SCHEDULE"
        }
      ]
    }
  ]

  fargate_profiles = [
    {
      name                   = "karpenter",
      namespace              = "karpenter",
      pod_execution_role_arn = module.iam.iam_role_arn["eks-fargate-pod-execution-role"].arn,
      labels                 = {}
    },
    {
      name                   = "coredns",
      namespace              = "kube-system",
      pod_execution_role_arn = module.iam.iam_role_arn["eks-fargate-pod-execution-role"].arn,
      labels                 = {
        k8s-app = "kube-dns"
      }
    },
    {
      name                   = "aws-load-balancer-controller",
      namespace              = "kube-system",
      pod_execution_role_arn = module.iam.iam_role_arn["eks-fargate-pod-execution-role"].arn,
      labels                 = {
        "app.kubernetes.io/name" = "aws-load-balancer-controller"
      }
    }
  ]

  source = "./modules/my_eks"
}

# aws_load_balancer_controller
module "aws_load_balancer_controller" {
  aws_region            = var.aws_region
  vpc_id                = module.vpc.my_vpcs["my-vpc"].id
  vpc_cidr              = module.vpc.my_vpcs["my-vpc"].cidr_block
  eks_cluster_name      = module.eks.cluster_name
  eks_cluster_endpoint  = module.eks.endpoint
  eks_oidc_url          = module.eks.oidc_url
  eks_ca_certificate    = module.eks.ca_certificate

  source                = "./modules/my_aws_load_balancer_controller"
}

# karpenter
module "karpenter" {
  aws_region           = var.aws_region
  create_namespace     = true
  eks_cluster_name     = module.eks.cluster_name
  eks_cluster_endpoint = module.eks.endpoint
  eks_oidc_url         = module.eks.oidc_url
  eks_ca_certificate   = module.eks.ca_certificate

  source = "./modules/my_karpenter"
}


Terraform Execution Plan

  5. To install Karpenter into the AWS EKS cluster, run terraform init && terraform plan --out .plan -var-file=example.tfvars in the project directory and review the result:

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
  + create
  - destroy

Terraform will perform the following actions:

  # module.eks.aws_eks_fargate_profile.profiles["aws-load-balancer-controller"] will be created
  + resource "aws_eks_fargate_profile" "profiles" {
      + arn                    = (known after apply)
      + cluster_name           = "MY-EKS-CLUSTER"
      + fargate_profile_name   = "aws-load-balancer-controller"
      + id                     = (known after apply)
      + pod_execution_role_arn = "arn:aws:iam::597635706810:role/eks-fargate-pod-execution-role"
      + status                 = (known after apply)
      + subnet_ids             = [
          + "subnet-02e2a73f157cce8ba",
          + "subnet-068ec8e9ec1f4ed8b",
          + "subnet-0a7ba0f71e5500d41",
        ]
      + tags_all               = (known after apply)

      + selector {
          + labels    = {
              + "app.kubernetes.io/name" = "aws-load-balancer-controller"
            }
          + namespace = "kube-system"
        }
    }

  # module.eks.aws_eks_fargate_profile.profiles["coredns"] will be created
  + resource "aws_eks_fargate_profile" "profiles" {
      + arn                    = (known after apply)
      + cluster_name           = "MY-EKS-CLUSTER"
      + fargate_profile_name   = "coredns"
      + id                     = (known after apply)
      + pod_execution_role_arn = "arn:aws:iam::597635706810:role/eks-fargate-pod-execution-role"
      + status                 = (known after apply)
      + subnet_ids             = [
          + "subnet-02e2a73f157cce8ba",
          + "subnet-068ec8e9ec1f4ed8b",
          + "subnet-0a7ba0f71e5500d41",
        ]
      + tags_all               = (known after apply)

      + selector {
          + labels    = {
              + "k8s-app" = "kube-dns"
            }
          + namespace = "kube-system"
        }
    }

  # module.eks.aws_eks_node_group.groups["ng-arm-spot"] will be destroyed
  # (because key ["ng-arm-spot"] is not in for_each map)
  - resource "aws_eks_node_group" "groups" {
      - ami_type        = "AL2_ARM_64" -> null
      - arn             = "arn:aws:eks:ap-northeast-1:597635706810:nodegroup/MY-EKS-CLUSTER/ng-arm-spot/0ec56389-e571-f6d0-323d-09bb2a423fad" -> null
      - capacity_type   = "SPOT" -> null
      - cluster_name    = "MY-EKS-CLUSTER" -> null
      - disk_size       = 20 -> null
      - id              = "MY-EKS-CLUSTER:ng-arm-spot" -> null
      - instance_types  = [
          - "t4g.small",
        ] -> null
      - labels          = {} -> null
      - node_group_name = "ng-arm-spot" -> null
      - node_role_arn   = "arn:aws:iam::597635706810:role/eks-node-group" -> null
      - release_version = "1.25.13-20230919" -> null
      - resources       = [
          - {
              - autoscaling_groups              = [
                  - {
                      - name = "eks-ng-arm-spot-0ec56389-e571-f6d0-323d-09bb2a423fad"
                    },
                ]
              - remote_access_security_group_id = ""
            },
        ] -> null
      - status          = "ACTIVE" -> null
      - subnet_ids      = [
          - "subnet-02e2a73f157cce8ba",
          - "subnet-068ec8e9ec1f4ed8b",
          - "subnet-0a7ba0f71e5500d41",
        ] -> null
      - tags            = {} -> null
      - tags_all        = {} -> null
      - version         = "1.25" -> null

      - scaling_config {
          - desired_size = 1 -> null
          - max_size     = 2 -> null
          - min_size     = 1 -> null
        }

      - taint {
          - effect = "PREFER_NO_SCHEDULE" -> null
          - key    = "spotInstance" -> null
          - value  = "true" -> null
        }

      - update_config {
          - max_unavailable            = 1 -> null
          - max_unavailable_percentage = 0 -> null
        }
    }

  # module.karpenter.aws_cloudwatch_event_rule.karpenter_instance_state_change_rule will be created
  + resource "aws_cloudwatch_event_rule" "karpenter_instance_state_change_rule" {
      + arn            = (known after apply)
      + event_bus_name = "default"
      + event_pattern  = jsonencode(
            {
              + detail-type = [
                  + "EC2 Instance State-change Notification",
                ]
              + source      = [
                  + "aws.ec2",
                ]
            }
        )
      + id             = (known after apply)
      + is_enabled     = true
      + name           = "karpenter_MY-EKS-CLUSTER-InstanceStateChangeRule"
      + name_prefix    = (known after apply)
      + tags_all       = (known after apply)
    }

  # module.karpenter.aws_cloudwatch_event_rule.karpenter_rebalance_rule will be created
  + resource "aws_cloudwatch_event_rule" "karpenter_rebalance_rule" {
      + arn            = (known after apply)
      + event_bus_name = "default"
      + event_pattern  = jsonencode(
            {
              + detail-type = [
                  + "EC2 Instance Rebalance Recommendation",
                ]
              + source      = [
                  + "aws.ec2",
                ]
            }
        )
      + id             = (known after apply)
      + is_enabled     = true
      + name           = "karpenter_MY-EKS-CLUSTER-RebalanceRule"
      + name_prefix    = (known after apply)
      + tags_all       = (known after apply)
    }

  # module.karpenter.aws_cloudwatch_event_rule.karpenter_scheduled_change_rule will be created
  + resource "aws_cloudwatch_event_rule" "karpenter_scheduled_change_rule" {
      + arn            = (known after apply)
      + event_bus_name = "default"
      + event_pattern  = jsonencode(
            {
              + detail-type = [
                  + "AWS Health Event",
                ]
              + source      = [
                  + "aws.health",
                ]
            }
        )
      + id             = (known after apply)
      + is_enabled     = true
      + name           = "karpenter_MY-EKS-CLUSTER-ScheduledChangeRule"
      + name_prefix    = (known after apply)
      + tags_all       = (known after apply)
    }

  # module.karpenter.aws_cloudwatch_event_rule.karpenter_spot_interruption_rule will be created
  + resource "aws_cloudwatch_event_rule" "karpenter_spot_interruption_rule" {
      + arn            = (known after apply)
      + event_bus_name = "default"
      + event_pattern  = jsonencode(
            {
              + detail-type = [
                  + "EC2 Spot Instance Interruption Warning",
                ]
              + source      = [
                  + "aws.ec2",
                ]
            }
        )
      + id             = (known after apply)
      + is_enabled     = true
      + name           = "karpenter_MY-EKS-CLUSTER-SpotInterruptionRule"
      + name_prefix    = (known after apply)
      + tags_all       = (known after apply)
    }

  # module.karpenter.aws_cloudwatch_event_target.karpenter_instance_state_change_rule_karpenter_interruption_queue_target will be created
  + resource "aws_cloudwatch_event_target" "karpenter_instance_state_change_rule_karpenter_interruption_queue_target" {
      + arn            = (known after apply)
      + event_bus_name = "default"
      + id             = (known after apply)
      + rule           = "karpenter_MY-EKS-CLUSTER-InstanceStateChangeRule"
      + target_id      = "KarpenterInterruptionQueueTarget"
    }

  # module.karpenter.aws_cloudwatch_event_target.karpenter_rebalance_rule_karpenter_interruption_queue_target will be created
  + resource "aws_cloudwatch_event_target" "karpenter_rebalance_rule_karpenter_interruption_queue_target" {
      + arn            = (known after apply)
      + event_bus_name = "default"
      + id             = (known after apply)
      + rule           = "karpenter_MY-EKS-CLUSTER-RebalanceRule"
      + target_id      = "KarpenterInterruptionQueueTarget"
    }

  # module.karpenter.aws_cloudwatch_event_target.karpenter_scheduled_change_rule_karpenter_interruption_queue_target will be created
  + resource "aws_cloudwatch_event_target" "karpenter_scheduled_change_rule_karpenter_interruption_queue_target" {
      + arn            = (known after apply)
      + event_bus_name = "default"
      + id             = (known after apply)
      + rule           = "karpenter_MY-EKS-CLUSTER-ScheduledChangeRule"
      + target_id      = "KarpenterInterruptionQueueTarget"
    }

  # module.karpenter.aws_cloudwatch_event_target.karpenter_spot_interruption_rule_karpenter_interruption_queue_target will be created
  + resource "aws_cloudwatch_event_target" "karpenter_spot_interruption_rule_karpenter_interruption_queue_target" {
      + arn            = (known after apply)
      + event_bus_name = "default"
      + id             = (known after apply)
      + rule           = "karpenter_MY-EKS-CLUSTER-SpotInterruptionRule"
      + target_id      = "KarpenterInterruptionQueueTarget"
    }

  # module.karpenter.aws_iam_instance_profile.karpenter_node_instance_profile will be created
  + resource "aws_iam_instance_profile" "karpenter_node_instance_profile" {
      + arn         = (known after apply)
      + create_date = (known after apply)
      + id          = (known after apply)
      + name        = "KarpenterNodeInstanceProfile-MY-EKS-CLUSTER"
      + name_prefix = (known after apply)
      + path        = "/"
      + role        = "KarpenterNodeRole-MY-EKS-CLUSTER"
      + tags_all    = (known after apply)
      + unique_id   = (known after apply)
    }

  # module.karpenter.aws_iam_policy.karpenter_controller_policy will be created
  + resource "aws_iam_policy" "karpenter_controller_policy" {
      + arn         = (known after apply)
      + id          = (known after apply)
      + name        = "KarpenterControllerPolicy-MY-EKS-CLUSTER"
      + name_prefix = (known after apply)
      + path        = "/"
      + policy      = jsonencode(
            {
              + Statement = [
                  + {
                      + Action   = [
                          + "ec2:CreateLaunchTemplate",
                          + "ec2:CreateFleet",
                          + "ec2:RunInstances",
                          + "ec2:CreateTags",
                          + "ec2:TerminateInstances",
                          + "ec2:DeleteLaunchTemplate",
                          + "ec2:DescribeLaunchTemplates",
                          + "ec2:DescribeInstances",
                          + "ec2:DescribeSecurityGroups",
                          + "ec2:DescribeSubnets",
                          + "ec2:DescribeImages",
                          + "ec2:DescribeInstanceTypes",
                          + "ec2:DescribeInstanceTypeOfferings",
                          + "ec2:DescribeAvailabilityZones",
                          + "ec2:DescribeSpotPriceHistory",
                          + "ssm:GetParameter",
                          + "pricing:GetProducts",
                        ]
                      + Effect   = "Allow"
                      + Resource = "*"
                    },
                  + {
                      + Action   = [
                          + "sqs:DeleteMessage",
                          + "sqs:GetQueueUrl",
                          + "sqs:GetQueueAttributes",
                          + "sqs:ReceiveMessage",
                        ]
                      + Effect   = "Allow"
                      + Resource = "arn:aws:sqs:ap-northeast-1:597635706810:MY-EKS-CLUSTER"
                    },
                  + {
                      + Action   = [
                          + "iam:PassRole",
                        ]
                      + Effect   = "Allow"
                      + Resource = "arn:aws:iam::597635706810:role/KarpenterNodeRole-MY-EKS-CLUSTER"
                    },
                ]
              + Version   = "2012-10-17"
            }
        )
      + policy_id   = (known after apply)
      + tags_all    = (known after apply)
    }

  # module.karpenter.aws_iam_role.karpenter_controller_role will be created
  + resource "aws_iam_role" "karpenter_controller_role" {
      + arn                   = (known after apply)
      + assume_role_policy    = jsonencode(
            {
              + "Version": "2012-10-17",
              + "Statement": [
                + {
                  + "Sid": "AllowScopedEC2InstanceActions",
                  + "Effect": "Allow",
                  + "Resource": [
                    + "arn:aws:ec2:${var.aws_region}::image/*",
                    + "arn:aws:ec2:${var.aws_region}::snapshot/*",
                    + "arn:aws:ec2:${var.aws_region}:*:spot-instances-request/*",
                    + "arn:aws:ec2:${var.aws_region}:*:security-group/*",
                    + "arn:aws:ec2:${var.aws_region}:*:subnet/*",
                    + "arn:aws:ec2:${var.aws_region}:*:launch-template/*"
                  ],
                  + "Action": [
                    + "ec2:RunInstances",
                    + "ec2:CreateFleet"
                  ]
                },
                + {
                  + "Sid": "AllowScopedEC2LaunchTemplateActions",
                  + "Effect": "Allow",
                  + "Resource": "arn:aws:ec2:${var.aws_region}:*:launch-template/*",
                  + "Action": "ec2:CreateLaunchTemplate",
                  + "Condition": {
                    + "StringEquals": {
                      + "aws:RequestTag/kubernetes.io/cluster/${var.eks_cluster_name}": "owned"
                    },
                    + "StringLike": {
                      + "aws:RequestTag/karpenter.sh/provisioner-name": "*"
                    }
                  }
                },
                + {
                  + "Sid": "AllowScopedEC2InstanceActionsWithTags",
                  + "Effect": "Allow",
                  + "Resource": [
                    + "arn:aws:ec2:${var.aws_region}:*:fleet/*",
                    + "arn:aws:ec2:${var.aws_region}:*:instance/*",
                    + "arn:aws:ec2:${var.aws_region}:*:volume/*",
                    + "arn:aws:ec2:${var.aws_region}:*:network-interface/*"
                  ],
                  + "Action": [
                    + "ec2:RunInstances",
                    + "ec2:CreateFleet"
                  ],
                  + "Condition": {
                    + "StringEquals": {
                      + "aws:RequestTag/kubernetes.io/cluster/${var.eks_cluster_name}": "owned"
                    },
                    + "StringLike": {
                      + "aws:RequestTag/karpenter.sh/provisioner-name": "*"
                    }
                  }
                },
                + {
                  + "Sid": "AllowScopedResourceCreationTagging",
                  + "Effect": "Allow",
                  + "Resource": [
                    + "arn:aws:ec2:${var.aws_region}:*:fleet/*",
                    + "arn:aws:ec2:${var.aws_region}:*:instance/*",
                    + "arn:aws:ec2:${var.aws_region}:*:volume/*",
                    + "arn:aws:ec2:${var.aws_region}:*:network-interface/*",
                    + "arn:aws:ec2:${var.aws_region}:*:launch-template/*"
                  ],
                  + "Action": "ec2:CreateTags",
                  + "Condition": {
                    + "StringEquals": {
                      + "aws:RequestTag/kubernetes.io/cluster/${var.eks_cluster_name}": "owned",
                      + "ec2:CreateAction": [
                        + "RunInstances",
                        + "CreateFleet",
                        + "CreateLaunchTemplate"
                      ]
                    },
                    + "StringLike": {
                      + "aws:RequestTag/karpenter.sh/provisioner-name": "*"
                    }
                  }
                },
                + {
                  + "Sid": "AllowMachineMigrationTagging",
                  + "Effect": "Allow",
                  + "Resource": "arn:aws:ec2:${var.aws_region}:*:instance/*",
                  + "Action": "ec2:CreateTags",
                  + "Condition": {
                    + "StringEquals": {
                      + "aws:ResourceTag/kubernetes.io/cluster/${var.eks_cluster_name}": "owned",
                      + "aws:RequestTag/karpenter.sh/managed-by": "${var.eks_cluster_name}"
                    },
                    + "StringLike": {
                      + "aws:RequestTag/karpenter.sh/provisioner-name": "*"
                    },
                    + "ForAllValues:StringEquals": {
                      + "aws:TagKeys": [
                        + "karpenter.sh/provisioner-name",
                        + "karpenter.sh/managed-by"
                      ]
                    }
                  }
                },
                + {
                  + "Sid": "AllowScopedDeletion",
                  + "Effect": "Allow",
                  + "Resource": [
                    + "arn:aws:ec2:${var.aws_region}:*:instance/*",
                    + "arn:aws:ec2:${var.aws_region}:*:launch-template/*"
                  ],
                  + "Action": [
                    + "ec2:TerminateInstances",
                    + "ec2:DeleteLaunchTemplate"
                  ],
                  + "Condition": {
                    + "StringEquals": {
                      + "aws:ResourceTag/kubernetes.io/cluster/${var.eks_cluster_name}": "owned"
                    },
                    + "StringLike": {
                      + "aws:ResourceTag/karpenter.sh/provisioner-name": "*"
                    }
                  }
                },
                + {
                  + "Sid": "AllowRegionalReadActions",
                  + "Effect": "Allow",
                  + "Resource": "*",
                  + "Action": [
                    + "ec2:DescribeAvailabilityZones",
                    + "ec2:DescribeImages",
                    + "ec2:DescribeInstances",
                    "ec2:DescribeInstanceTypeOfferings",
                    + "ec2:DescribeInstanceTypes",
                    + "ec2:DescribeLaunchTemplates",
                    + "ec2:DescribeSecurityGroups",
                    + "ec2:DescribeSpotPriceHistory",
                    + "ec2:DescribeSubnets"
                  ],
                  + "Condition": {
                    + "StringEquals": {
                      + "aws:RequestedRegion": "${var.aws_region}"
                    }
                  }
                },
                + {
                  + "Sid": "AllowGlobalReadActions",
                  + "Effect": "Allow",
                  + "Resource": "*",
                  + "Action": [
                    + "pricing:GetProducts",
                    + "ssm:GetParameter"
                  ]
                },
                + {
                  + "Sid": "AllowInterruptionQueueActions",
                  + "Effect": "Allow",
                  + "Resource": "${aws_sqs_queue.karpenter_interruption_queue.arn}",
                  + "Action": [
                    + "sqs:DeleteMessage",
                    + "sqs:GetQueueAttributes",
                    + "sqs:GetQueueUrl",
                    + "sqs:ReceiveMessage"
                  ]
                },
                + {
                  + "Sid": "AllowPassingInstanceRole",
                  + "Effect": "Allow",
                  + "Resource": "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/KarpenterNodeRole-${var.eks_cluster_name}",
                  + "Action": "iam:PassRole",
                  + "Condition": {
                    + "StringEquals": {
                      + "iam:PassedToService": "ec2.amazonaws.com"
                    }
                  }
                },
                + {
                  + "Sid": "AllowAPIServerEndpointDiscovery",
                  + "Effect": "Allow",
                  + "Resource": "arn:aws:eks:${var.aws_region}:${data.aws_caller_identity.current.account_id}:cluster/${var.eks_cluster_name}",
                  + "Action": "eks:DescribeCluster"
                }
              ]
            }
        )
      + create_date           = (known after apply)
      + force_detach_policies = false
      + id                    = (known after apply)
      + managed_policy_arns   = (known after apply)
      + max_session_duration  = 3600
      + name                  = "MY-EKS-CLUSTER-karpenter"
      + name_prefix           = (known after apply)
      + path                  = "/"
      + tags                  = {
          + "alpha.eksctl.io/cluster-name"                = "MY-EKS-CLUSTER"
          + "alpha.eksctl.io/eksctl-version"              = "0.112.0"
          + "alpha.eksctl.io/iamserviceaccount-name"      = "karpenter/karpenter"
          + "eksctl.cluster.k8s.io/v1alpha1/cluster-name" = "MY-EKS-CLUSTER"
        }
      + tags_all              = {
          + "alpha.eksctl.io/cluster-name"                = "MY-EKS-CLUSTER"
          + "alpha.eksctl.io/eksctl-version"              = "0.112.0"
          + "alpha.eksctl.io/iamserviceaccount-name"      = "karpenter/karpenter"
          + "eksctl.cluster.k8s.io/v1alpha1/cluster-name" = "MY-EKS-CLUSTER"
        }
      + unique_id             = (known after apply)
    }

  # module.karpenter.aws_iam_role.karpenter_node_role will be created
  + resource "aws_iam_role" "karpenter_node_role" {
      + arn                   = (known after apply)
      + assume_role_policy    = jsonencode(
            {
              + Statement = [
                  + {
                      + Action    = "sts:AssumeRole"
                      + Effect    = "Allow"
                      + Principal = {
                          + Service = "ec2.amazonaws.com"
                        }
                    },
                ]
              + Version   = "2012-10-17"
            }
        )
      + create_date           = (known after apply)
      + force_detach_policies = false
      + id                    = (known after apply)
      + managed_policy_arns   = [
          + "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
          + "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy",
          + "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy",
          + "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
        ]
      + max_session_duration  = 3600
      + name                  = "KarpenterNodeRole-MY-EKS-CLUSTER"
      + name_prefix           = (known after apply)
      + path                  = "/"
      + tags_all              = (known after apply)
      + unique_id             = (known after apply)
    }

  # module.karpenter.aws_iam_role_policy_attachment.karpenter_controller_policy_attachment will be created
  + resource "aws_iam_role_policy_attachment" "karpenter_controller_policy_attachment" {
      + id         = (known after apply)
      + policy_arn = (known after apply)
      + role       = "MY-EKS-CLUSTER-karpenter"
    }

  # module.karpenter.aws_sqs_queue.karpenter_interruption_queue will be created
  + resource "aws_sqs_queue" "karpenter_interruption_queue" {
      + arn                               = (known after apply)
      + content_based_deduplication       = false
      + deduplication_scope               = (known after apply)
      + delay_seconds                     = 0
      + fifo_queue                        = false
      + fifo_throughput_limit             = (known after apply)
      + id                                = (known after apply)
      + kms_data_key_reuse_period_seconds = 300
      + max_message_size                  = 262144
      + message_retention_seconds         = 300
      + name                              = "MY-EKS-CLUSTER"
      + name_prefix                       = (known after apply)
      + policy                            = jsonencode(
            {
              + Id        = "EC2InterruptionPolicy"
              + Statement = [
                  + {
                      + Action    = "sqs:SendMessage"
                      + Effect    = "Allow"
                      + Principal = {
                          + Service = [
                              + "sqs.amazonaws.com",
                              + "events.amazonaws.com",
                            ]
                        }
                      + Resource  = "arn:aws:sqs:ap-northeast-1:597635706810:MY-EKS-CLUSTER"
                    },
                ]
              + Version   = "2008-10-17"
            }
        )
      + receive_wait_time_seconds         = 0
      + redrive_allow_policy              = (known after apply)
      + redrive_policy                    = (known after apply)
      + sqs_managed_sse_enabled           = true
      + tags_all                          = (known after apply)
      + url                               = (known after apply)
      + visibility_timeout_seconds        = 30
    }

  # module.karpenter.helm_release.karpenter will be created
  + resource "helm_release" "karpenter" {
      + atomic                     = false
      + chart                      = "karpenter"
      + cleanup_on_fail            = false
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = false
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 0
      + metadata                   = (known after apply)
      + name                       = "karpenter"
      + namespace                  = "karpenter"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "oci://public.ecr.aws/karpenter/karpenter"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "v0.20.0"
      + wait                       = true
      + wait_for_jobs              = false

      + set {
          + name  = "controller.clusterEndpoint"
          + value = "https://90E1532254BC69D6CC23A05090B6B69E.gr7.ap-northeast-1.eks.amazonaws.com"
        }
      + set {
          + name  = "controller.clusterName"
          + value = "MY-EKS-CLUSTER"
        }
      + set {
          + name  = "serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn"
          + value = "MY-EKS-CLUSTER-karpenter"
        }
      + set {
          + name  = "settings.aws.defaultInstanceProfile"
          + value = "KarpenterNodeInstanceProfile-MY-EKS-CLUSTER"
        }
      + set {
          + name  = "settings.aws.interruptionQueueName"
          + value = "MY-EKS-CLUSTER"
        }
    }

  # module.karpenter.kubernetes_config_map.aws_auth will be created
  + resource "kubernetes_config_map" "aws_auth" {
      + data = {
          + "mapRoles" = <<-EOT
                - groups:
                  - system:bootstrappers
                  - system:nodes
                  rolearn: arn:aws:iam::597635706810:role/eks-node-group
                  username: system:node:{{EC2PrivateDNSName}}
                - groups:
                  - system:bootstrappers
                  - system:nodes
                  - system:node-proxier
                  rolearn: arn:aws:iam::597635706810:role/eks-fargate-pod-execution-role
                  username: system:node:{{SessionName}}
                
                - rolearn: arn:aws:iam::597635706810:role/KarpenterNodeRole-MY-EKS-CLUSTER
                  username: system:node:{{EC2PrivateDNSName}}
                  groups:
                    - system:bootstrappers
                    - system:nodes
            EOT
          + "mapUsers" = <<-EOT
                - userarn: arn:aws:iam::597635706810:root
                  username: admin
                  groups:
                    - system:masters
            EOT
        }
      + id   = (known after apply)

      + metadata {
          + generation       = (known after apply)
          + name             = "aws-auth"
          + namespace        = "kube-system"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.karpenter.kubernetes_service_account.karpenter will be created
  + resource "kubernetes_service_account" "karpenter" {
      + automount_service_account_token = true
      + default_secret_name             = (known after apply)
      + id                              = (known after apply)

      + metadata {
          + annotations      = (known after apply)
          + generation       = (known after apply)
          + labels           = {
              + "app.kubernetes.io/component"  = "controller"
              + "app.kubernetes.io/managed-by" = "terraform"
              + "app.kubernetes.io/name"       = "karpenter"
            }
          + name             = "karpenter"
          + namespace        = "karpenter"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.karpenter.null_resource.helm_experimental_oci will be created
  + resource "null_resource" "helm_experimental_oci" {
      + id       = (known after apply)
      + triggers = {
          + "random" = (known after apply)
        }
    }

Plan: 20 to add, 0 to change, 1 to destroy.

───────────────────────────────────────────────────────────────────────

Saved the plan to: .plan

To perform exactly these actions, run the following command to apply:
    terraform apply ".plan"
Releasing state lock. This may take a few moments...
  6. After applying the plan file .plan, you still need to apply the Karpenter AWSNodeTemplate and Provisioner manifests; see ./modules/my_karpenter/2.create-provision.sh as an example (a sketch of tagging the subnets Karpenter discovers follows the script):
#!/bin/sh

CLUSTER_NAME="MY-EKS-CLUSTER"

# Create AWSNodeTemplate
cat <<EOF | kubectl apply -f -
---
apiVersion: karpenter.k8s.aws/v1alpha1
kind: AWSNodeTemplate
metadata:
  name: default
spec:
  subnetSelector:                             # required
    karpenter.sh/discovery: ${CLUSTER_NAME}
  securityGroupSelector:                      # required, when not using launchTemplate
    karpenter.sh/discovery: ${CLUSTER_NAME}
  tags:
    eks-cluster: ${CLUSTER_NAME}
  blockDeviceMappings:
    - deviceName: /dev/xvda
      ebs:
        volumeType: gp3
        volumeSize: 20Gi
        deleteOnTermination: true
EOF

## SRE-EKS-ARM-spot ##
cat << EOF | kubectl apply -f -
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: karpenter-arm-spot
spec:
  # Enables consolidation which attempts to reduce cluster cost by both removing un-needed nodes and down-sizing those
  # that can't be removed.  Mutually exclusive with the ttlSecondsAfterEmpty parameter.
  consolidation:
    enabled: true
  ttlSecondsUntilExpired: 2592000
  requirements:
    - key: "alpha.eksctl.io/nodegroup-name"
      operator: In
      values: ["Karpenter-ARM-spot"]
    - key: "node.kubernetes.io/instance-type"
      operator: In
      values: ["t4g.small"]
    - key: "karpenter.sh/capacity-type" # Defaults to on-demand
      operator: In
      values: ["spot"]
    - key: "kubernetes.io/arch"
      operator: In
      values: ["arm64"]
    - key: "nth/enabled"
      operator: In
      values: ["true"]
    - key: "eks-cluster"
      operator: In
      values: [${CLUSTER_NAME}]
  taints:
    - key: spotInstance
      value: 'true'
      effect: PreferNoSchedule
    - key: armInstance
      value: 'true'
      effect: PreferNoSchedule
  providerRef:
    name: default
  # 6 xlarge or 3 2xlarge
  limits:
    resources:
      cpu: 24
      memory: 48Gi
  # Karpenter provides the ability to specify a few additional Kubelet args.
  # These are all optional and provide support for additional customization and use cases.
  kubeletConfiguration:
    maxPods: 110
EOF
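
The subnetSelector and securityGroupSelector above match resources tagged karpenter.sh/discovery: <CLUSTER_NAME>, so those tags have to exist before the Provisioner can launch nodes. Here is a sketch of adding the tag from the root module; the subnet references follow the private subnets used in main.tf and are illustrative:

resource "aws_ec2_tag" "karpenter_subnet_discovery" {
  # Tag the private application subnets so Karpenter can discover them.
  # The cluster (or node) security group needs the same tag for securityGroupSelector.
  for_each = {
    "ap-northeast-1a" = module.subnet.subnets["my-application-ap-northeast-1a"].id
    "ap-northeast-1c" = module.subnet.subnets["my-application-ap-northeast-1c"].id
    "ap-northeast-1d" = module.subnet.subnets["my-application-ap-northeast-1d"].id
  }

  resource_id = each.value
  key         = "karpenter.sh/discovery"
  value       = module.eks.cluster_name
}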

  7. A manual way to install Karpenter is also provided; see ./modules/my_karpenter/1.install-karpenter.sh and feel free to try that installation approach if you are interested.

The next article will demonstrate a Terraform module for AWS DynamoDB.

